  Re: A question of speed  
From: Thorsten Froehlich
Date: 9 Jun 2003 06:31:49
Message: <3ee46215@news.povray.org>
In article <3ee33411@news.povray.org> , "Andrew Coppin" 
<orp### [at] btinternetcom> wrote:

> Why is raytracing so slow?

It is not slow.  To understand why, you have to understand a very important
area of computer science: the analysis of algorithms.  Actually, it isn't
too difficult if you paid attention in the advanced math you had in the last
few years of high school:

The comparison of ray-tracing with scanline rendering is a bit too complex
for introducing the general concept, so I will pick a well-described example
instead: sorting algorithms.  Sorting is a very well studied problem in
computer science, and one that can be described mathematically very well.
You can read about it in any book about algorithms.

It is possible to show that you can sort any sequence of n elements in at
most n lg n steps (this is oversimplified; see the book "Introduction to
Algorithms" for a detailed discussion).  Now, there is a common and popular
algorithm called quicksort, for which you can always find an input sequence
that takes n*n steps to sort.  So why is everybody using quicksort if it
takes so much longer (for a big n) than the best solution possible?  Well,
the worst-case runtime of an algorithm does not necessarily say anything
relevant about its average runtime.  And in the case of quicksort, the
average is still n lg n.
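You can see both behaviours with a few lines of code.  This is a minimal
sketch (not from the original post): a naive quicksort that always picks the
first element as pivot, so an already-sorted input is its worst case, while
a shuffled input shows the n lg n average:

```python
import random

def quicksort(xs):
    # Naive quicksort with the first element as pivot; returns the
    # sorted list and the number of comparisons used.
    if len(xs) <= 1:
        return xs, 0
    pivot, rest = xs[0], xs[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    ls, lc = quicksort(left)
    rs, rc = quicksort(right)
    # len(rest) comparisons to partition, plus the recursive work
    return ls + [pivot] + rs, len(rest) + lc + rc

n = 300
random.seed(1)
sorted_out, avg_steps = quicksort(random.sample(range(n), n))
_, worst_steps = quicksort(list(range(n)))  # already sorted: the worst case
print(avg_steps, worst_steps)  # avg_steps near n lg n; worst_steps is n*(n-1)/2
```

For n = 300 the worst case takes exactly 300*299/2 = 44850 comparisons,
while a typical shuffled input needs only a few thousand.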

It can even be shown that you will always need at least n lg n steps to sort
a sequence if you have to compare elements of that sequence (that is, if you
use a sorting algorithm from the family of comparison sorts).  But does this
mean you cannot sort any sequence of elements in fewer than n lg n steps?

No, you just have to use more information than merely saying you have
"elements".  For example, if you know your elements are integers from 1 to
n, there are algorithms which sort them in n steps!
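One such algorithm is counting sort; here is a minimal sketch of the idea
(my example, not from the original post).  It never compares two elements,
so the n lg n lower bound simply does not apply:

```python
def counting_sort(xs, n):
    # Sort integers drawn from 1..n without comparing elements:
    # tally each value, then emit the tallies in order.  O(n) steps.
    counts = [0] * (n + 1)
    for x in xs:
        counts[x] += 1
    out = []
    for value in range(1, n + 1):
        out.extend([value] * counts[value])
    return out

print(counting_sort([3, 1, 4, 1, 5, 2], 5))  # → [1, 1, 2, 3, 4, 5]
```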

> OK, so on the surface I know it probably sounds daft, but think about it...
> They have graphics cards that can render polygon meshes in realtime - meshes
> with just downright silly numbers of polygons in them - so why does it take
> a raytracer a few whole *seconds* to render a single sphere with 1 light
> source? OK, OK, because it's in software, not hardware, I know... but I
> played Quake II for years without hardware acceleration. (Poor deprived
> student... *sniff*) So I say again, WHY is raytracing slower?

It has nothing to do with software or hardware at all.  The fundamental
property, the runtime bounds, cannot be changed.  But there are two more
things you need to know that I didn't mention above.

Every step of the algorithm takes time.  And, obviously, how long a step
takes depends on a lot of things.  In fact, depending on the algorithm, a
step can be relatively slow or fast.

Another thing I did not mention so far is that you cannot use average
runtime to compare algorithms whose basic steps are dissimilar, or, more
generally, whose runtime is a combination of several dissimilar factors.
And this is why your assumption that ray-tracing "is slower", based on
looking at a simple scene, is misguided.

For (simple) ray-tracing you have to test each of the n objects in the
scene for all m pixels, so the runtime to render will be about m*n.  For
scanline rendering you have n objects which consist of t triangles, each of
which covers p pixels, where p is m/q and 1/q is the share of each triangle
in the overall image (one that counts all *drawn* triangle pixels, not just
all visible triangle pixels: you need to draw pixels to determine their
visibility, usually drawing many pixels over each other and selecting the
closest one).  So the runtime to render will be about n*t*p.

Notice something?  For ray-tracing the render-time is directly proportional
to the number of pixels in the image.  But for scanline rendering it is not.
On the other hand, for ray-tracing the number of objects is just one part of
the equation.

So, to say anything about these two algorithms, we do need the constant
factors, and this is where your "is slower" statement can be shown to be
relative.  That is, we write c1*m*n and c2*n*t*m/q.  Let's say q is a
constant factor, i.e. each triangle contributes 1/10000 of the image, so we
have c2*n*t*m/10000, which is simply c2*n*t*m once the constant is folded
into c2.  Further, we assume every object consists of (very few) triangles,
say 10.  So we get, after eliminating constants, c2*n*m.

See something about the runtime now?  By fixing some constants we were able
to make the two equations comparable!  And now they both take the same
time.  But wait, there are these annoying constant factors.  It turns out
that c2 is much smaller than c1.  So does this show ray-tracing will always
be slower?  Well, yes and no.  It shows that the simple algorithm I
presented as "ray-tracing" will always be slower.  Let's return to the
number of objects in our ray-tracing runtime estimation.  Do we really need
to test each and every object each and every time?

No!  There are various ways to optimise finding the closest object.  Let's
consider a very simple one:  Assume all objects are approximately evenly
distributed in a specific volume of the scene.  Subdivide the space in the
scene into cubes (like a grid on paper, but 3d), with an average of q
objects in each cube.  So you have r*r*r = n/q cubes.  Now, to find an
intersection you only need to follow a line (the ray) through these cubes,
and only those cubes the ray touches need to be checked.  The longest line
through a space of r*r*r cubes crosses on the order of sqrt(r*r*3) =
r*sqrt(3) cubes, which in turn is proportional to n^(1/3), as we again
ignore constant factors (because they don't change their constness).
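Here is a minimal sketch of that grid idea (my illustration, with made-up
numbers): n = 8000 random points bucketed into r*r*r cells, and an
axis-aligned ray that crosses only one row of r cells, so only the objects
in those cells would need an intersection test:

```python
import random

def build_grid(objects, r):
    # Bucket each object (a point in the unit cube) into one of r*r*r cells.
    grid = {}
    for p in objects:
        cell = tuple(min(int(c * r), r - 1) for c in p)
        grid.setdefault(cell, []).append(p)
    return grid

random.seed(2)
n = 8000
objects = [(random.random(), random.random(), random.random())
           for _ in range(n)]
q = 8                             # average objects per cell
r = round((n / q) ** (1 / 3))     # r*r*r = n/q cells, so r = 10 here
grid = build_grid(objects, r)

# An axis-aligned ray at (y, z) = (0.5, 0.5) crosses exactly r cells;
# only the objects inside those cells need an intersection test.
iy = iz = min(int(0.5 * r), r - 1)
tested = sum(len(grid.get((ix, iy, iz), [])) for ix in range(r))
print(tested, "of", n)  # roughly n/(r*r) = 80 tests instead of 8000
```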

So now the ray-tracing runtime is c1*m*n^(1/3).  If you compare this to the
scanline rendering runtime of c2*m*n, do you notice what will happen, even
if you assume c1 = 1000000 and c2 = 1?  You simply pick an n, that is a
number of objects, big enough, and the runtime of the ray-tracing algorithm
will be smaller than that of the scanline algorithm.

The problem is, for the constants I suggested, you need a really big n.  So
only if you have a whole lot of objects will ray-tracing be faster.
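Just how big "really big" is can be computed directly.  A small sketch (my
arithmetic, using the illustrative constants above: the ray-tracer gets the
large per-test constant 1000000, the scanline renderer gets 1, and the
tests per ray are assumed to grow like n^(1/3)):

```python
# Hypothetical constants: the ray-tracer pays a big per-test cost (c_ray),
# the scanline renderer a tiny one (c_scan).  The pixel count m cancels.
c_ray, c_scan = 1_000_000, 1

def raytrace_cost(n):   # grid-accelerated: tests grow like n^(1/3)
    return c_ray * n ** (1 / 3)

def scanline_cost(n):   # work grows linearly with the object count
    return c_scan * n

# Crossover where c_ray * n^(1/3) = c_scan * n:  n = (c_ray/c_scan)^(3/2)
crossover = (c_ray / c_scan) ** 1.5
print(crossover)  # a billion objects before ray-tracing wins
print(raytrace_cost(10**10) < scanline_cost(10**10))
```

With these numbers the crossover sits at a billion objects: below that the
scanline renderer wins, above it the ray-tracer does.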

Analysing algorithms means defining a function for their runtime and then
comparing the growth of these functions.  And as you know from school, just
looking at one, or even many, samples of a function will not tell you much,
if anything, about the growth of that function!

    Thorsten

____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trfde

Visit POV-Ray on the web: http://mac.povray.org


